7 research outputs found

    Global Search with Bernoulli Alternation Kernel for Task-oriented Grasping Informed by Simulation

    Full text link
    We develop an approach that benefits from large simulated datasets and takes full advantage of the limited online data that is most relevant. We propose a variant of Bayesian optimization that alternates between using informed and uninformed kernels. With this Bernoulli Alternation Kernel we ensure that discrepancies between simulation and reality do not hinder adapting robot control policies online. The proposed approach is applied to a challenging real-world problem of task-oriented grasping with novel objects. Our further contribution is a neural network architecture and training pipeline that use experience from grasping objects in simulation to learn grasp stability scores. We learn task scores from a labeled dataset with a convolutional network, which is used to construct an informed kernel for our variant of Bayesian optimization. Experiments on an ABB Yumi robot with real sensor data demonstrate the success of our approach, despite the challenge of fulfilling task requirements and high uncertainty over physical properties of objects. Comment: To appear in the 2nd Conference on Robot Learning (CoRL), 2018.
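    A minimal sketch of the alternation idea described above, not the paper's implementation: at each Bayesian-optimization step a Bernoulli draw decides whether the Gaussian-process surrogate uses an "informed" kernel (here a squared exponential over a placeholder simulation feature map `phi_sim`) or the uninformed kernel on the raw grasp parameters. The toy objective, the feature map, and the 0.5 Bernoulli parameter are all assumptions made for illustration.

    import numpy as np

    # Hypothetical stand-ins: the real informed kernel comes from a network
    # trained on simulated grasps; here phi_sim is just a fixed random projection.
    rng = np.random.default_rng(0)
    W = rng.normal(size=(2, 8))
    phi_sim = lambda X: np.tanh(X @ W)           # "informed" feature map (assumed)

    def sq_exp(A, B, ell=0.5):
        """Squared-exponential kernel matrix between rows of A and B."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / ell ** 2)

    def gp_posterior(K, k_star, k_ss, y, noise=1e-4):
        """Posterior mean and variance of a zero-mean GP at the query points."""
        L = np.linalg.cholesky(K + noise * np.eye(len(y)))
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
        mu = k_star.T @ alpha
        v = np.linalg.solve(L, k_star)
        var = k_ss - (v ** 2).sum(0)
        return mu, np.maximum(var, 1e-12)

    def objective(x):                             # toy stand-in for a grasp outcome
        return -np.sum((x - 0.3) ** 2) + 0.05 * rng.normal()

    # Bayesian optimization with a Bernoulli choice of kernel per iteration.
    X = rng.uniform(-1, 1, size=(3, 2))           # initial grasp parameters
    y = np.array([objective(x) for x in X])
    cand = rng.uniform(-1, 1, size=(200, 2))      # candidate grasp parameters
    p_informed = 0.5                              # Bernoulli parameter (assumed)

    for it in range(20):
        use_informed = rng.random() < p_informed
        A, C = (phi_sim(X), phi_sim(cand)) if use_informed else (X, cand)
        K = sq_exp(A, A)
        k_star = sq_exp(A, C)
        k_ss = np.ones(len(C))                    # sq_exp(c, c) = 1 on the diagonal
        mu, var = gp_posterior(K, k_star, k_ss, y)
        ucb = mu + 2.0 * np.sqrt(var)             # upper-confidence-bound acquisition
        x_next = cand[np.argmax(ucb)]
        X = np.vstack([X, x_next])
        y = np.append(y, objective(x_next))

    print("best value found:", y.max())

    The alternation means that iterations using the uninformed kernel can still explore regions where the simulation-derived features are misleading, which is the stated motivation for the kernel.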

    Learning for Task-Oriented Grasping

    No full text
    Task-oriented grasping refers to the problem of computing stable grasps on objects that allow for the subsequent execution of a task. Although grasping objects in a task-oriented manner comes naturally to humans, it is still very challenging for robots. Take, for example, a service robot deployed in a household. Such a robot should be able to execute complex tasks that might include cutting a banana or flipping a pancake. To do this, the robot needs to know what and how to grasp so that the task can be executed. There are several challenges in this regard. First, the robot needs to be able to select an appropriate object for the task; this pertains to the theory of affordances. Second, it needs to know how to place the hand so that the task can be executed, for example, grasping a knife by the handle when cutting. Finally, algorithms for task-oriented grasping should be scalable and generalize well over many object classes and tasks. This is challenging because there are no available datasets that contain information about the mutual relations between objects, tasks, and grasps.

    In this thesis, we present methods and algorithms for task-oriented grasping that rely on deep learning. We use deep learning to detect object affordances, predict task-oriented grasps on novel objects, and parse human activity datasets in order to transfer this knowledge to a robot. For learning affordances, we present a method for detecting functional parts given a visual observation of an object and a task. We use the detected affordances together with other object properties to plan stable, task-oriented grasps on novel objects. For task-oriented grasping, we present a system for predicting grasp scores that take into account both the task and the stability. The grasps are then executed on a real robot and refined via Bayesian optimization. Finally, for parsing human activity datasets, we present an algorithm for estimating 3D hand and object poses and shapes from 2D images, so that information about the contacts and relative hand placement can be extracted. We demonstrate that the information obtained in this manner can be used to teach a robot task-oriented grasps, by performing experiments with a real robot on a set of novel objects.
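    A minimal sketch of the grasp-selection step described in the abstract, under stated assumptions: the thesis predicts grasp scores that account for both task and stability, but the combination rule, the weighted sum, the function names, and the random score callables below are illustrative placeholders, not the thesis pipeline.

    import numpy as np

    def select_task_oriented_grasp(grasps, stability_score, task_score, alpha=0.5):
        """Rank candidate grasps by a combined task/stability score.

        grasps          : (N, D) array of candidate grasp parameters
        stability_score : callable mapping (N, D) -> (N,) stability estimates
                          (in the thesis this comes from a network trained on
                          simulated grasps; here it is just a callable)
        task_score      : callable mapping (N, D) -> (N,) task suitability
                          (e.g. does the grasp leave the blade free for cutting)
        alpha           : trade-off weight between the two terms (assumed form)
        """
        s = stability_score(grasps)
        t = task_score(grasps)
        combined = alpha * s + (1.0 - alpha) * t   # one simple way to combine them
        order = np.argsort(-combined)              # best candidates first
        return grasps[order], combined[order]

    # Toy usage with random scores standing in for the learned networks.
    rng = np.random.default_rng(1)
    candidates = rng.uniform(-1, 1, size=(50, 6))  # e.g. 6-DoF grasp poses
    ranked, scores = select_task_oriented_grasp(
        candidates,
        stability_score=lambda g: rng.random(len(g)),
        task_score=lambda g: rng.random(len(g)),
    )
    print("top grasp:", ranked[0], "score:", scores[0])

    In the thesis the top-ranked grasps are then executed on the real robot and refined online with Bayesian optimization, rather than being accepted directly.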

    Global Search with Bernoulli Alternation Kernel for Task-oriented Grasping Informed by Simulation

    No full text
    We develop an approach that benefits from large simulated datasets and takes full advantage of the limited online data that is most relevant. We propose a variant of Bayesian optimization that alternates between using informed and uninformed kernels. With this Bernoulli Alternation Kernel we ensure that discrepancies between simulation and reality do not hinder adapting robot control policies online. The proposed approach is applied to a challenging real-world problem of task-oriented grasping with novel objects. Our further contribution is a neural network architecture and training pipeline that use experience from grasping objects in simulation to learn grasp stability scores. We learn task scores from a labeled dataset with a convolutional network, which is used to construct an informed kernel for our variant of Bayesian optimization. Experiments on an ABB Yumi robot with real sensor data demonstrate the success of our approach, despite the challenge of fulfilling task requirements and high uncertainty over physical properties of objects.